Solutions are categorized as

+ closed form
+ infinite series
The general form of a separable equation is
\[ \begin{equation} \frac{dy}{dx} = f(x) g(y) \end{equation} \]
Then rearrange the equation as
\[ \begin{equation} \int \frac{dy}{g(y)} = \int f(x) dx \end{equation} \]
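For example, consider \(\frac{dy}{dx} = x y\). Separating and integrating,
\[ \begin{align} \int \frac{dy}{y} &= \int x dx \\ \ln |y| &= \frac{x^{2}}{2} + C_{0} \\ y &= A e^{x^{2}/2} \end{align} \]
where \(A = \pm e^{C_{0}}\).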
An exact equation is of this form
\[ \begin{equation} A(x, y) dx + B(x, y) dy = 0, \quad \frac{\partial A}{\partial y} = \frac{\partial B}{\partial x} \end{equation} \]
\(A(x, y) dx + B(x, y) dy\) is an exact differential. We restate it as
\[ \begin{equation} A(x, y) dx + B(x, y) dy = dU = \frac{\partial U}{\partial x} dx + \frac{\partial U}{\partial y} dy \end{equation} \]
The above statement requires
\[ \begin{equation} \frac{\partial A}{\partial y} = \frac{\partial B}{\partial x} \end{equation} \]
By applying the following equations,
\[ \begin{align} \frac{\partial U}{\partial x} &= A(x, y) \\ \frac{\partial U}{\partial y} &= B(x, y) \\ \end{align} \]
we obtain \(U(x, y)\). So the exact equation can be integrated directly.[^1]
\[ \begin{equation} \int dU = C \end{equation} \]
For example, solve the following ODE.
\[ (3x + y) dx + x dy = 0 \]
Take \(A(x, y) = 3x + y\) and \(B(x, y) = x\). Then we get
\[ \begin{align} \frac{\partial U}{\partial x} &= 3x + y \\ \frac{\partial U}{\partial y} &= x \\ \end{align} \]
Integrating the second equation with respect to \(y\) gives \(U(x, y) = xy + f(x)\). Substituting into the first equation gives \(y + f^{\prime}(x) = y + 3x\), so \(f^{\prime}(x) = 3x\) and \(U(x, y) = xy + \frac{3}{2} x^{2} + C_{0}\). The integration \(\int dU = C_{1}\) is equivalent to \(U(x, y) = C_{1}\). So the final solution is \(xy + \frac{3}{2} x^{2} = C\).
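As a quick check, SymPy (assuming it is installed) recovers the same family of solutions:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# (3x + y) dx + x dy = 0 rewritten as a first-order ODE in y(x)
ode = sp.Eq(3*x + y(x) + x*y(x).diff(x), 0)

# Expect y(x) = C1/x - 3*x/2, equivalent to x*y + (3/2)*x**2 = C
print(sp.dsolve(ode))
```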
An inexact equation is of this form
\[ \begin{equation} A(x, y) dx + B(x, y) dy = 0, \quad \frac{\partial A}{\partial y} \neq \frac{\partial B}{\partial x} \end{equation} \]
Although the above equation is inexact, it can be made exact by multiplying by an integrating factor \(\mu\):
\[ \begin{equation} \frac{\partial (\mu A)}{\partial y} = \frac{\partial (\mu B)}{\partial x} \end{equation} \]
In general \(\mu\) may be a function of both \(x\) and \(y\), but the following method works when it is a function of \(x\) alone or of \(y\) alone. For instance, suppose \(\mu\) is a function of \(x\) only. Then the above equation reduces to
\[ \begin{equation} \mu \frac{\partial A}{\partial y} = \mu \frac{\partial B}{\partial x} + B \frac{d \mu}{dx} \end{equation} \]
Then rearrange it as
\[ \begin{equation} \frac{d \mu}{\mu} = \Big ( \frac{\partial A}{\partial y} - \frac{\partial B}{\partial x} \Big ) \frac{dx}{B} \end{equation} \]
Here the further assumption is that \(\Big ( \frac{\partial A}{\partial y} - \frac{\partial B}{\partial x} \Big ) / B\) is a function of \(x\) alone. Integrating the above equation then gives an explicit expression for the integrating factor; multiplying the original equation by it yields an exact equation.
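For example, \(2y \, dx + x \, dy = 0\) is inexact since \(\frac{\partial A}{\partial y} = 2 \neq 1 = \frac{\partial B}{\partial x}\). Here \(\Big ( \frac{\partial A}{\partial y} - \frac{\partial B}{\partial x} \Big ) / B = 1/x\), so
\[ \begin{align} \frac{d \mu}{\mu} = \frac{dx}{x} \quad \Rightarrow \quad \mu = x \end{align} \]
Multiplying through by \(\mu = x\) gives \(2xy \, dx + x^{2} \, dy = d(x^{2} y) = 0\), hence \(x^{2} y = C\).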
The general form of a linear ODE of order \(n\) is
\[ \begin{equation} a_{n}(x) \frac{d^{n}y}{dx^{n}} + a_{n-1}(x) \frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_{1}(x) \frac{dy}{dx} + a_{0}(x) y = f(x) \end{equation} \]
Find the complementary solution first, then a particular solution, and combine them.
Whether the solutions of a high-order linear ODE are linearly independent can be checked with the Wronskian.
When the coefficients \(a_{i}\) are constants, the solution is to assume \(y = A e^{\lambda x}\) and substitute into the original equation. Dividing out the exponential factor leaves the auxiliary equation.
\[ \begin{equation} a_{n} \lambda^{n} + a_{n-1} \lambda^{n-1} + \cdots + a_{1} \lambda + a_{0} = 0 \end{equation} \]
Three main cases arise for the roots of the auxiliary equation:

+ All roots are real and distinct
+ Some roots are complex
+ Some roots are repeated
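For example, for \(y^{\prime \prime} - 3 y^{\prime} + 2y = 0\) the auxiliary equation is
\[ \begin{align} \lambda^{2} - 3 \lambda + 2 = (\lambda - 1)(\lambda - 2) = 0 \end{align} \]
so the roots are real and distinct and the complementary solution is \(y = c_{1} e^{x} + c_{2} e^{2x}\).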
There is no generally applicable method to find a particular solution. However, there are some standard situations where we can employ the method of undetermined coefficients.
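For example, by undetermined coefficients, for \(y^{\prime \prime} - 3 y^{\prime} + 2y = e^{3x}\), try \(y_{p} = A e^{3x}\):
\[ \begin{align} (9 - 9 + 2) A e^{3x} = e^{3x} \quad \Rightarrow \quad A = \frac{1}{2} \end{align} \]
so \(y_{p} = \frac{1}{2} e^{3x}\).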
We can use the method of variation of parameters to find a particular solution of an equation of this form.
\[ \begin{equation} y^{\prime \prime} + P(x) y^{\prime} + Q(x) y = f(x) \end{equation} \]
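For reference, if \(y_{1}, y_{2}\) are linearly independent solutions of the homogeneous equation and \(W = y_{1} y^{\prime}_{2} - y_{2} y^{\prime}_{1}\) is their Wronskian, the standard variation-of-parameters result is
\[ \begin{align} y_{p}(x) = -y_{1}(x) \int \frac{y_{2}(x) f(x)}{W(x)} dx + y_{2}(x) \int \frac{y_{1}(x) f(x)}{W(x)} dx \end{align} \]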
The Laplace transform method turns the original ODE into a purely algebraic equation. Once the algebraic equation is solved, taking the inverse Laplace transform gives the solution of the original ODE. Specifically, two formulas from Laplace theory are used to transform the left side of the ODE; the right side is also Laplace-transformed, usually by consulting a standard table. The formulas are as follows
\[ \begin{align} \bar{f}(s) &= \int^{\infty}_{0} e^{-sx} f(x) dx \\ \overline{f^{(n)}}(s) &= s^{n} \bar{f}(s) - s^{n-1} f(0) - s^{n-2} f^{\prime}(0) - \cdots - s f^{(n-2)}(0) - f^{(n-1)}(0) \\ \end{align} \]
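A minimal sketch with SymPy (assuming it is available): for \(y^{\prime \prime} + y = 1\) with \(y(0) = y^{\prime}(0) = 0\), the transformed equation is \(s^{2} \bar{y} + \bar{y} = 1/s\), so \(\bar{y} = 1/(s(s^{2}+1))\), and inverting the transform recovers \(y(x)\).

```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.symbols('s', positive=True)

# Transform of y'' + y = 1 with y(0) = y'(0) = 0:
#   s**2*Y - s*y(0) - y'(0) + Y = 1/s  =>  Y = 1/(s*(s**2 + 1))
Y = 1 / (s * (s**2 + 1))

# Invert the transform; expect y(x) = 1 - cos(x)
print(sp.simplify(sp.inverse_laplace_transform(Y, s, x)))
```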
There is no generally applicable method for finding solutions of this type of ODE. Nevertheless, there are some cases in which a solution is possible.
Standard form
\[ \begin{equation} y^{\prime \prime} + P(x) y^{\prime} + Q(x) y = 0 \end{equation} \]
If we already have one solution \(y_{1}(x)\), then we can define \(y = u(x) y_{1}(x)\), whose derivatives are \(y^{\prime} = u^{\prime} y_{1} + u y^{\prime}_{1}\) and \(y^{\prime \prime} = u^{\prime \prime} y_{1} + 2 u^{\prime} y^{\prime}_{1} + u y^{\prime \prime}_{1}\). Substituting into the original equation gives
\[ \begin{equation} y^{\prime \prime} + P y^{\prime} + Q y = u ( y^{\prime \prime}_{1} + P y^{\prime}_{1} + Q y_{1} ) + u^{\prime} ( 2 y^{\prime}_{1} + P y_{1} ) + u^{\prime \prime} y_{1} = 0 \end{equation} \]
Since \(y_{1}\) solves the homogeneous equation, the first bracket vanishes. Define \(w = u^{\prime}\); then
\[ \begin{equation} y_{1} w^{\prime} + ( 2 y^{\prime}_{1} + P y_{1} ) w = 0 \end{equation} \]
Separate variables
\[ \begin{align} \frac{dw}{w} + \frac{2 y^{\prime}_{1}}{y_{1}} dx + P dx &= 0 \\ \int \frac{1}{w} dw + \int \frac{2 y^{\prime}_{1}}{y_{1}} dx + \int P dx &= 0 \end{align} \]
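Carrying out the integrations gives \(\ln w + 2 \ln y_{1} + \int P dx = \text{const}\), so
\[ \begin{align} u^{\prime} = w = \frac{C}{y_{1}^{2}} e^{- \int P dx}, \qquad y_{2} = y_{1} \int \frac{e^{- \int P dx}}{y_{1}^{2}} dx \end{align} \]
which is the standard reduction-of-order formula for the second solution.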
A method for obtaining solutions to linear ODEs in the form of convergent series.
Indeed, some elementary functions can be seen as convergent series, and such series are given special names such as \(\sin x\), \(\cos x\), or \(\exp x\).
The general form is
\[ \begin{align} y^{\prime \prime} + p(x) y^{\prime} + q(x) y = 0 \end{align} \]
The general form of solutions is as follows
\[ \begin{align} y(x) = c_{1} y_{1}(x) + c_{2} y_{2}(x) \end{align} \]
To determine whether \(y_{1}\) and \(y_{2}\) are linearly independent, use the Wronskian.
\[ \begin{align} W(x) = \begin{vmatrix} y_{1} & y_{2} \\ y^{\prime}_{1} & y^{\prime}_{2} \\ \end{vmatrix} = y_{1} y^{\prime}_{2} - y_{2} y^{\prime}_{1} \end{align} \]
If the Wronskian is nonzero in a given interval, then \(y_{1}, y_{2}\) are linearly independent in that interval.
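For example, for \(y_{1} = e^{x}\) and \(y_{2} = e^{2x}\),
\[ \begin{align} W(x) = e^{x} \cdot 2 e^{2x} - e^{2x} \cdot e^{x} = e^{3x} \neq 0 \end{align} \]
so the two solutions are linearly independent everywhere.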
Another way to evaluate the Wronskian: since each \(y_{i}\) satisfies \(y^{\prime \prime}_{i} = -p y^{\prime}_{i} - q y_{i}\),
\[ \begin{align} W^{\prime} = y_{1} y^{\prime \prime}_{2} - y_{2} y^{\prime \prime}_{1} = - p W \end{align} \]
This is separable, \(dW / W = -p \, dx\); integrating, we find
\[ \begin{align} W(x) = C \exp \Big \{ - \int^{x} p(u) du \Big \} \end{align} \]
Points classification
If, at some point \(z = z_{0}\), the coefficients \(p(z), q(z)\) are finite and can be expressed as complex power series, then \(p(z), q(z)\) are said to be analytic at \(z = z_{0}\), and \(z = z_{0}\) is an ordinary point. If \(p(z)\), \(q(z)\), or both diverge at the point, it is a singular point.
Even if the ODE is singular at \(z = z_{0}\), it may still possess a non-singular (finite) solution there. The necessary and sufficient condition for such a solution to exist is that \((z - z_{0}) p(z)\) and \((z - z_{0})^{2} q(z)\) are both analytic at \(z = z_{0}\). In that case the point is called a regular singular point; otherwise it is called an irregular or essential singularity.
Sometimes we want to investigate the behaviour as \(|z| \rightarrow \infty\); then substitute \(w = 1/z\) and study the point \(w = 0\).
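Under this substitution, the chain rule gives
\[ \begin{align} \frac{dy}{dz} = -w^{2} \frac{dy}{dw}, \qquad \frac{d^{2}y}{dz^{2}} = w^{4} \frac{d^{2}y}{dw^{2}} + 2 w^{3} \frac{dy}{dw} \end{align} \]
and the point \(w = 0\) is then classified as above.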
Ordinary point
We seek a power series solution of the form
\[ \begin{align} y(z) = \sum^{\infty}_{n=0} a_{n} z^{n} \end{align} \]
The radius of convergence \(R\) is the distance from \(z = 0\) to the nearest singular point of the ODE.
Substituting a power series of the above form into the original ODE yields a recurrence relation among the coefficients, from which all the coefficients can be determined.
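For example, substituting \(y = \sum_{n} a_{n} z^{n}\) into \(y^{\prime \prime} + y = 0\) gives
\[ \begin{align} \sum^{\infty}_{n=0} \Big [ (n+2)(n+1) a_{n+2} + a_{n} \Big ] z^{n} = 0 \quad \Rightarrow \quad a_{n+2} = - \frac{a_{n}}{(n+2)(n+1)} \end{align} \]
which reproduces the series for \(\cos z\) (from \(a_{0}\)) and \(\sin z\) (from \(a_{1}\)).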
Regular singular point
Suppose \(z = 0\) is a regular singular point of the equation
\[ \begin{align} y^{\prime \prime} + p(z) y^{\prime} + q(z) y = 0 \end{align} \]
Fuchs's theorem shows that there exists at least one solution of the form
\[ \begin{align} y = z^{\sigma} \sum^{\infty}_{n=0} a_{n} z^{n} \end{align} \]
where \(\sigma\) is a real or complex number and \(a_{0} \neq 0\). Such a series is called a Frobenius series.
The radius of convergence \(R\) is again the distance from \(z = 0\) to the nearest singular point of the ODE.
Rearranging the ODE as \(z^{2} y^{\prime \prime} + z s(z) y^{\prime} + t(z) y = 0\), where \(s(z) = z p(z)\) and \(t(z) = z^{2} q(z)\) are analytic at \(z = 0\), and substituting the Frobenius series, we obtain
\[ \begin{align} \sum^{\infty}_{n=0} \Big [ (n + \sigma)(n + \sigma - 1) + s(z) (n + \sigma) + t(z) \Big ] a_{n} z^{n + \sigma} = 0 \end{align} \]
Dividing by \(z^{\sigma}\) and setting \(z = 0\), only the \(n = 0\) term survives; since \(a_{0} \neq 0\), we obtain the indicial equation
\[ \begin{align} \sigma (\sigma - 1) + s(0) \sigma + t(0) = 0 \end{align} \]
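For example, for \(4z y^{\prime \prime} + 2 y^{\prime} + y = 0\) we have \(s(z) = z p(z) = 1/2\) and \(t(z) = z^{2} q(z) = z/4\), so the indicial equation is \(\sigma (\sigma - 1) + \sigma / 2 = 0\), with roots \(\sigma = 0\) and \(\sigma = 1/2\); their difference is not an integer, so two Frobenius solutions exist.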
\(\sigma_{1} - \sigma_{2}\) is not an integer
There are two linearly independent solutions of the ODE
\[ \begin{align} y_{1}(z) &= z^{\sigma_{1}} \sum^{\infty}_{n=0} a_{n} z^{n} \\ y_{2}(z) &= z^{\sigma_{2}} \sum^{\infty}_{n=0} b_{n} z^{n} \\ \end{align} \]
The linear independence of the solutions can be verified with the Wronskian.
\(\sigma_{1} = \sigma_{2}\)
In this case only one solution in the form of a Frobenius series can be found; another method must be used to find the second solution.
\(\sigma_{1} - \sigma_{2}\) is a non-zero integer
The root with the larger real part always leads to a solution. The root with the smaller real part may or may not lead to a second linearly independent solution; if it does not, we must use another method to find the second solution.
There is no generally applicable method for finding solutions of this type of ODE.
A homogeneous boundary condition in a BVP means the prescribed values at the boundary are \(0\).
In a BVP with homogeneous boundary conditions, the trivial solution \(y = 0\) always exists. What has practical value is a non-trivial solution, which exists only for particular values of a parameter in the equation, the eigenvalues. Determining those eigenvalues is the eigenvalue problem.
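For example, \(y^{\prime \prime} + \lambda y = 0\) with \(y(0) = y(\pi) = 0\) has non-trivial solutions only for
\[ \begin{align} \lambda_{n} = n^{2}, \qquad y_{n}(x) = \sin (n x), \qquad n = 1, 2, 3, \ldots \end{align} \]
these \(\lambda_{n}\) are the eigenvalues and the \(y_{n}\) are the corresponding eigenfunctions.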
[^1]: I cannot understand the next step as listed in ODE textbooks; direct integration of this equation seems apparent to me.